ViSiCAST Deliverable D5-1: Interface Definitions

Abstract

This report describes the languages defined to interface between the various components developed within work-package 5. At a later point in time, these languages will also become visible to components in other work-packages as well as to users of the authoring environment. Chapter 1 gives an introduction and relates the following chapters to each other. Chapter 2 describes the language chosen to define the semantic contents of phrases to be translated into sign language, based mainly on Discourse Representation Theory. Chapter 3 documents the revisions that have been made to HamNoSys, a notation system mainly for the manual components of sign languages, in order to meet the requirements of the project. Chapter 4 describes the notation systems defined for non-manual aspects of sign language. Chapter 5 builds on the latter two to describe the form-related part of the lexicon of signs to be used in the work-package.

Revision History

Revision 1 (October 2001)
• Replaced graphics in ch. 4 by higher-resolution pictures

1 Introduction

One of the goals of the ViSiCAST project is to provide translation from English into sign language. As English on the one hand and the three target languages (British, Dutch, and German Sign Language) on the other are not closely related, it is our belief that only a deep translation can provide good-quality output. We therefore set up a system that extracts the meaning of the English input sentences and uses the resulting semantic representation to drive the generation process for the target sign language. By making the intermediate semantic representation as language-neutral as possible, or even biased towards sign language, we aim to reduce the spoken-language influence on sign language production. The semantic representation formalism suggested here is based on Discourse Representation Theory, as this theory makes explicit structures that are of major importance for sign language production. The formalism is presented in chapter 2.

A minimal translation system between two oral languages translates text into text. In the case of ViSiCAST, the situation is somewhat different, as there is no established written form for sign languages. (A number of systems have been proposed, but even probably the most successful among them, SignWriting (see Sutton 1999), is not used within a larger community as an everyday writing system.) Therefore, the system output is computer-animated signing, using the avatar technology developed in parallel in another ViSiCAST work-package. (This approach also ensures that the system output can be judged by native users of the target language, a top priority for a faithful evaluation of the system.) The animation is driven by a description of signed sentences encoded in an XML-compliant language called SiGML (Signing Gesture Markup Language). Functionally, SiGML is a superset of HamNoSys, the Hamburg Notation System for sign languages. The basic idea of SiGML, which will be described in extenso in Deliverable D5-2, is a timed and synchronised multi-tier representation of signed utterances, where each tier encodes one of the parallel information channels signing makes use of: the hands, the body, the head, facial expressions, mouthing, and eye gaze. The manual aspects of signs are described by HamNoSys notations.
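To make the multi-tier idea concrete before SiGML itself is specified, the following Python fragment assembles a small XML structure in the spirit just described. It is a sketch only: every element and attribute name below is invented for illustration, and the actual SiGML vocabulary is defined in Deliverable D5-2.

    # Illustrative only: "sigml", "sign" and the tier element names are
    # hypothetical; the real SiGML vocabulary is specified in D5-2.
    import xml.etree.ElementTree as ET

    root = ET.Element("sigml")
    sign = ET.SubElement(root, "sign", gloss="HOUSE")

    # One child element per parallel information channel of the utterance.
    for tier, content in [
        ("manual",   "hamnosys-string-here"),  # the hands, in HamNoSys
        ("head",     "nod"),
        ("eyegaze",  "addressee"),
        ("mouthing", "house"),
    ]:
        ET.SubElement(sign, tier).text = content

    print(ET.tostring(root, encoding="unicode"))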
A new version of HamNoSys has been developed for the ViSiCAST project. Chapter 3 of this document describes in detail all the additions and revisions made to its predecessor, version 3; knowledge of the earlier versions is therefore required. A self-contained documentation of the new version 4 will be published separately.

Non-manual aspects of signs can only partially be described in HamNoSys. Facial expressions, for example, would require a substantial number of new concepts to be introduced into HamNoSys. As multi-tier descriptions become readily available in language technology, a complete integration into HamNoSys, in the sense that a sign notation still consists of a string of characters, seems to be neither necessary nor desirable. Instead, the approach taken presents a number of coding sets for individual non-manual aspects and leaves it to the SiGML level to integrate them into composed events. This approach has the additional benefit of being easily extensible, should new codes become necessary for other sign languages or even our target sign languages, without affecting the much more mature coding of the manual aspects. Chapter 4 outlines the current proposal.

In the language generation process, the lexicon of the target language plays a central role. This is even more so for constraint-based grammar environments such as HPSG (Head-driven Phrase Structure Grammar), where in fact the major part of the knowledge about the language is stored in the lexicon instead of the grammar. The way sign languages make use of space renders it infeasible to store each inflected form of certain sign language verbs as a separate entry in the lexicon. It is therefore necessary to follow an approach where lexicon entries can be inflected by rules to produce the form required in a given context. Chapter 5 outlines the structures and mechanisms necessary to produce a token that is then describable by means of the notations presented in chapters 3 and 4.
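As a minimal sketch of the rule-based inflection idea, the fragment below fills the movement slots of a directional verb from context. The lexicon format, field names and rule name are assumptions made for this illustration, not the project's actual data structures.

    def inflect_directional_verb(entry, src, gol):
        """Fill the movement slots of a directional verb with source and
        goal positions in signing space, leaving the lexicon untouched."""
        form = dict(entry["citation_form"])   # shallow copy of the stem
        form["movement_from"] = src
        form["movement_to"] = gol
        return form

    # Hypothetical entry; real entries would carry HamNoSys material.
    GIVE = {"gloss": "GIVE",
            "citation_form": {"handshape": "flat-hand",
                              "movement_from": None, "movement_to": None}}

    # "I give you": the movement runs from the signer to the addressee.
    token = inflect_directional_verb(GIVE, src="signer", gol="addressee")
    print(token)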
So far, the languages presented have been described as interfaces between different components of the translation system. However, any of these can also become visible and editable to the (advanced) user of such a system. As machine translation is a very complex field, even more so when less intensively researched target languages are involved, one has to be very careful to define restrictions that reduce the complexity and thereby allow the project to reach its goal within a defined timeframe. Often this is accomplished by defining a domain or a number of domains the translation system will be used in. In ViSiCAST, we take this approach as well, but in addition we suggest a system where the user can intervene wherever the automatic process does not provide correct results. As the output of the translation system could not be produced conveniently and time-efficiently even by an experienced translator working by hand, the use of the system makes sense even if manual intervention is necessary.

One possible scenario for this is the task of translating a web page into sign language. The translator uses the system by feeding in the source text, reviewing the output, and modifying it where necessary. When finished, the SiGML description of the signed text is linked to the web page, where it can be found by signing assistants. (Signing assistants in web contexts are handled in work-package 2 of the ViSiCAST project.) The following description outlines what such an integrated translation environment might look like. It should become obvious where the user can take advantage of the systems described in the following chapters of this document.

From a user's point of view, the translation process can be modelled as five steps, with the four inner states being editable to achieve the desired results. The user may freely move back and forth between the different states: clicking on a tab further to the right implicitly starts the necessary compilation processes, while clicking on a tab to the left allows the user to go back to an earlier state to intervene there, thereby possibly voiding decisions taken for the states further down the line.

[Figure: tabs of the editing environment – English | Syntax | Semantics | Morphology | SiGML | Animation]

In the first state, English text is entered, either by typing it in or by pasting it from some other source. In the second state, the syntactic analysis of the input is presented to the user, who can select from multiple readings and check the part-of-speech assignments. (Graphics footage for this and the next screen dump was provided by Ian Marshall and Eva Safar from the University of East Anglia.) In the third state, the semantics of the input is presented. The top field contains word senses (WordNet), from which the user can pick the right word sense to avoid ambiguity; co-references suggested by the system can be corrected here as well.

The next step translates Discourse Representation Structures into a morphological description of the output. Obviously, this is where most of the sign language generation takes place. The state after this step, Morphology, also seems to be the easiest spot to intervene in the generation process, hence we will later detail the operations the user should have available here. From the string of morpheme complexes, the SiGML representation of the output is created. If necessary, the user can edit in this view as well, although it is expected that sign language translators will prefer the morphemes view instead of having to deal with the technical details of an XML structure. The final state is the end product, the animation. If the animation shows problems, the user may go back to one of the earlier states to manipulate the data. If the result is as intended, the user can go back to the SiGML representation to copy the output to some other document.

Now to the central editing environment: the morphemes view. We consider it the most convenient place to intervene, both for linguists and native signers, as this view is quite close to the signing stream and communicates with the user in terms that are familiar or can become familiar to him/her. The sign stream goes from left to right, with one column per sign. The top two tiers describe the sign in its entirety, through the animation (with a thumbnail as placeholder) and through the HamNoSys notation; clicking on the thumbnail plays the animation once. The gloss is the key to the "stems" called from the lexicon, whereas the following tiers show additional morphemes included in the sign. Colour codes may be used for co-references. The last tier in this sample shows suprasegmental information. This is not restricted to whole phrases, but may apply to any stretch of signs; there will definitely be more than one such tier.
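A possible data model behind this view could look as follows. This is a sketch under our own assumptions: the field names, and the choice to represent suprasegmentals as index ranges over the sign stream, are illustrative rather than part of the deliverable.

    from dataclasses import dataclass, field
    from typing import Dict, List, Optional, Tuple

    @dataclass
    class SignColumn:
        """One column of the sign stream; field names are assumptions."""
        gloss: str                       # key to the "stem" in the lexicon
        hamnosys: str = ""               # notation of the sign as a whole
        morphemes: Dict[str, str] = field(default_factory=dict)  # e.g. loc/src/gol
        thumbnail: Optional[str] = None  # stands in for the animation preview

    @dataclass
    class Utterance:
        signs: List[SignColumn] = field(default_factory=list)
        # suprasegmental tiers: (first sign index, last sign index, value),
        # so a value may span any stretch of signs, not only whole phrases
        suprasegmentals: List[Tuple[int, int, str]] = field(default_factory=list)

    u = Utterance()
    u.signs.append(SignColumn("HOUSE"))
    u.signs.append(SignColumn("GIVE", morphemes={"src": "signer", "gol": "addressee"}))
    u.suprasegmentals.append((0, 1, "topic"))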
Signs can be inserted, deleted, or rearranged in their order. Signs can be modified by assigning values to one of the tiers below LEX; the lexicon entry decides whether and how a certain tier can be filled for a particular sign. Some of the editors used for assigning values to the cells may be rather complex. For example, for the loc/src/gol morphemes, the user may select a position from a map of the horizontal plane in front of the signer on which previously used positions are marked.

Inserting signs requires the user to identify a sign first. Several methods should be available:
• Search by gloss
• Search by meaning (browsing WordNet)
• Search by form (entering HamNoSys for the citation form)

In each case, partial entries may result in browsable lists from which the desired sign can be selected (see the sketch below). Alternatively, the user may choose to fully describe the sign by giving a HamNoSys notation, either by typing in HamNoSys symbols or by using a syntax-guided editor. Suprasegmental features can be inserted and deleted as well. Additionally, they can be extended or shrunk sign by sign (using conventions for sub-sign timing).
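The three lookup methods could be realised along the following lines. The miniature lexicon, its field names, and the WordNet sense identifiers are assumptions for illustration; HamNoSys strings are abbreviated to placeholders.

    LEXICON = [
        {"gloss": "HOUSE", "senses": {"house.n.01"}, "hamnosys": "..."},
        {"gloss": "GIVE",  "senses": {"give.v.01"},  "hamnosys": "..."},
    ]

    def by_gloss(prefix):
        """Search by gloss; a partial entry yields a browsable list."""
        return [e for e in LEXICON if e["gloss"].startswith(prefix.upper())]

    def by_meaning(sense):
        """Search by meaning, here via a WordNet sense identifier."""
        return [e for e in LEXICON if sense in e["senses"]]

    def by_form(notation_prefix):
        """Search by form, matching the HamNoSys citation form."""
        return [e for e in LEXICON if e["hamnosys"].startswith(notation_prefix)]

    print(by_gloss("HO"))   # -> the HOUSE entry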
Thomas Hanke, University of Hamburg, mailto:[email protected]

2 Semantics interface language

2.1 Sign Language Characteristics

The aim of the Natural Language Processing component of work-package 5 is to allow semi-automatic manipulation of English text into a representation oriented towards signed presentation (cf. Safar & Marshall, submitted). The overall architecture focuses on the use of Discourse Representation Structures (DRSs) as an interlingua in which sufficient semantic information is extracted from the input text, or is volunteered by the user, to support subsequent sign generation. The DRS representation is sufficiently rich to have removed lexical, structural and semantic ambiguity without over-burdening the English NLP component with pragmatic issues and world-knowledge inferencing, which, if needed, must be volunteered by the user. The DRS representation isolates a number of semantic domains (events, states, the nominal elements involved in these, locational and temporal information, etc.) as well as indicating significant syntactic information (especially sentence type). Sign language synthesis is achieved by a subsequent conversion to an HPSG representation in which components of the DRS are regrouped into appropriate morphological components.

These DRSs capture characteristics that are significant in sign languages. In particular:

(i) Pronominal reference, more general co-reference, and placement. Repeated reference to the same individuals in a text can be replaced by references to positions in signing space. The DRS representation makes anaphoric references explicit by associating the same variable with multiple references across sentences/propositions. The sign space planner will then manage consistent allocation of such variables to significant positions in signing space.

(ii) Organisation of the back-end dictionary as a 'SignNet' analogous to WordNet gives the potential of using classifier shapes as pronominal references inside signs for verbs which incorporate subject/object information. In BSL, such 'proforms' are usually associated with information about verb object roles.

(iii) BSL signals temporal information significantly differently from English. In particular, English tenses are not signalled in a comparable way in sign language. Hence, the semantic representation should be as accurate as possible with respect to temporal information to allow conversion to sign language using, e.g., appropriate time lines (such as BSL's three/four major time lines).

(iv) BSL makes a significant grammatical distinction between a single event involving a group of objects and a repetitive event involving single objects. For example, the ambiguity of 'The lecturer spoke to the students.' as either 'The lecturer spoke individually to each student.' or 'The lecturer spoke to the students collectively.' needs to be resolved in order to sign one of these alternatives appropriately. DRSs allow the explicit representation of the set and of the individuals of the set. It may be possible to determine from the surrounding context which of these is appropriate, or human intervention may be required; in either case the representation will have required that the ambiguity is resolved.

(v) Topic-comment structure. Sign languages are reputed to have a topic-comment structure, hence we will look to deploy techniques which detect sentential and discourse topic. In the DRS this will be realised by sentential indication of the new information by a predicate 'comment', which can be used in later synthesis to organise the order of sign delivery.

The following characterisation is largely based upon the formulation of DRSs of Kamp and Reyle (Kamp & Reyle 1993, van Eijck & Kamp 1997). Labelling of propositions is an adaptation of Kamp & Reyle (1993), though in van Eijck & Kamp (1997) these labels are presented as an additional argument to each predicate. In general terms, this makes provision for their interval temporal framework coupled with first-order predicate logic and extensions to allow sets of objects as arguments to predicates, plus any further extensions we might consider desirable. For example, the labelling of attributive propositions merely extends Kamp and Reyle's (1993) notation but allows the possibility of [attr1:big(X) and very(attr1)] as a way of handling some modifiers. From a logical viewpoint this looks suspect, but from a practical point of view, BSL facial expressions associated with intensity could be associated with such higher-order predicates (a sketch of a possible machine representation follows the outline below). The description makes no provision for ease of identification of particular kinds of proposition, though we could agree to group these appropriately (e.g. all temporal relation propositions together, etc.).

The remainder of the chapter is structured as follows:
• descriptions of how syntactic forms are realised in the DRS and hence the associated semantic form;
• example sentences and their DRS forms (generated by existing tools, with deviations from the descriptions here indicated);
• formulation of the DRS structure as a BNF description.
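As a concrete illustration of the labelled-proposition notation, including the higher-order modifier case above, the fragment below shows one possible machine representation in Python. The tuple layout is our assumption for illustration only, not a format prescribed by the project.

    # a(2):glass(v(2))        -> ("a", 2, "glass", ["v2"])
    # attr(4):big(v(14))      -> ("attr", 4, "big", ["v14"])
    # attr(5):very(attr(4))   -> ("attr", 5, "very", ["attr4"])  (higher-order)
    drs = {
        "referents": ["v2", "v14"],
        "conditions": [
            ("a", 2, "glass", ["v2"]),
            ("attr", 4, "big", ["v14"]),
            ("attr", 5, "very", ["attr4"]),  # modifier applied to a label
        ],
    }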
2.2 Realisations of syntactic constructions in the DRS

Where possible, illustrative cases are taken from the kitchen-world example sentences.

2.2.1 Nominal constituents

1. Nouns are realised as a one-place predicate (the noun itself) whose argument is a referent vn, with a label an, e.g. glass:

    a(2):glass(v(2))

2. Plural nouns are realised in the same way, but with an additional relational expression for indefinite and numeric quantification, e.g. plates:

    a(12):plate(v(16)), c(0) = count(v(16)), c(0) > 1

3. In the case of numeric quantification the quantity is specified, e.g. five plates:

    a(12):plate(v(16)), c(0) = count(v(16)), c(0) = 5

   [Currently we do not process plurals like this; in the examples below they appear simply as e.g. a(12):plates(v(16)).]

4. Adjectives are realised as a one-place predicate (the adjective itself) whose argument is a referent vn, with a label attrn, e.g. big:

    attr(4):big(v(14))

5. Determiners and quantifiers are treated as in Safar & Marshall (submitted). This means that some quantifiers introduce a new sub-DRS (duplex condition), but predicates like 'exists' and 'forall' are also used, with the label qn being a quantification label.

   a. every, all, e.g.:

       [Ux] [ q(0):forall(v(x)), Condx ]  ⇒  [] [ Condx2 ]

   b. a, an, some, any (for the distinction between plural and singular indefinite determiners see 3 above), e.g.:

       [Ux] [ q(1):exists(v(x)), Condx ]

   c. numeric quantifiers (see 3 above)

   d. no / none (see also numeric quantifiers above), e.g.:

       a(12):plate(v(16)), c(0) = count(v(16)), c(0) = 0

   e. the, e.g.:

       [Ux] [ Condx ]

6. Pronouns

   a. subject/object: in the DRS these are realised like nouns, e.g.:

       a(31):she(v(41))

      The resolution of pronouns happens after the DRS creation:

       a(31):she(v(41))/a(33):Susan(v(42))

   b. reflexive: this point still has to be considered.

   c. here/there: these pronouns are analysed as locational verb modifiers, e.g.:

       l(1):here(e(2), v(4))

   d. one: this pronoun is resolved in the following way:

       a(0):cup(v(0))
       a(1):one(v(1))/a(1):cup(v(1))

      The predicate 'one' has to be resolved to 'cup', with an argument/referent different from that of the co-referential term (hence it retains the variable associated with the original 'one' predicate).

2.2.2 Verbal constituents

7. Intransitive verbs are realised as a one-place predicate (the verb itself) whose argument is a referent vn, with a label en, e.g. walk:

    e(4):walk(v(13))

8. Transitive verbs are realised as a two-place predicate (the verb itself) whose arguments are referents vn, vx (n not equal to x), with a label en, e.g. place:

    e(2):place(v(15), v(16))

9. Three-argument verbs are realised as two-argument predicates, like transitive verbs above (referents vn, vx). The indirect object is realised like a prepositional phrase, with referents em and vz, e.g.:

    e(2):give(v(4), v(5)), a(6):Peter(v(6)), l(2):to(e(2), v(6))

10. Verbs involving particles are realised as a one- or two-place predicate (the verb+particle itself) whose arguments are referents vn, vx, with a label en, e.g. put his coat on / put on his coat:

    e(2):put_on(v(15), v(16))

11. Verb forms and auxiliary verbs are realised, for their temporal information, as a temporal one-place predicate (when) whose argument is the event (or state) en/sn, with a label tn, e.g. placed:

    t(8):when(e(2)), t(8) …

…

Prepositions:

    … within) ln:into inside …

    preposition   example                    type                              realisation
    into          into the house             directional (outside -> within)   ln:into
    into          into the lamppost          collide with                      ln:collide_with
    from          walked from the house      directional                       ln:from
    out of        walked out of the house    directional (within -> outside)   ln:out_of
    above         above the box              loc                               ln:above
    below         below the box              loc                               ln:below
    by            by the box                 loc                               ln:by
    by            passive subject/agent                                        pn:by
    among         among the boxes            loc                               ln:among
    with          with his aunt                                                ln:with
    without       without his hat                                              ln:without

(under, beneath, beside, between, behind, in front of, opposite, near are all comparable with above/below)

N.B. For the ambiguity of in/into/away, 'ran' may be atypical:
    John ran into Mary (= meet)
    John ran away from his parents (= absconded)
A possible solution at this stage is to query the user: for CMU MVp 'in', ask if (in = inside) or (= from outside to within) (though this is possibly unnatural for 'ran in the garden'); for CMU MVp 'into', ask if (into = collided with).
19. Relative clauses are represented in the main DRS, where the missing subject or object is resolved to the referent which the clause refers to. This is indicated in the argument structure of the predicate of the embedded clause. Object-type relative clause, e.g.: The pan which he was using has disappeared.

    [v(0), v(1)]
    [ a(0):he(v(1))
      t(0):when(e(0))
      t(0)=cont
      t(0) … ]

…

    … drs([v(1), v(2)],
          [q(1):exists(v(1)), a(1):woman(v(1)),
           t(0):when(e(0)), t(0)=now,
           e(0):love(v(0), v(1)),
           q(2):exists(v(2)), a(2):park(v(2)),
           l(0):in(e(0), v(2))])

(Variable and proposition labels appear as 'lab(n)' rather than 'labn' merely for current convenience of processing. Currently only the referent variables v1, v2, … are listed in the variable lists.)

2.3.3 Example illustrating resolution of anaphora

Put the pot from the oven into the cupboard. Is it dirty?

    [ [ a(4):it(v(4))/a(1):pot(v(1)) ]        co-reference of a4 and a1 (v4 and v1)

      [v(0), v(1), v(2), v(3)]                Put the pot from the oven into the cupboard.
      [ a(0):you(v(0))
        a(1):pot(v(1))
        t(0):when(e(0))
        t(0)=now
        e(0):put(v(0), v(1))
        a(2):oven(v(2))
        l(0):from(e(0), v(2))
        a(3):cupboard(v(3))
        l(1):into(e(0), v(3)) ]

      [v(4), v(5)]                            Is it dirty?
      [ a(4):it(v(4))
        attr(0):dirty(v(5))
        qu(0):yesno(s(0))
        t(1):when(s(0))
        t(1)=now
        s(0):be(v(4), v(5)) ]
    ]
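Note that the resolution step does not rewrite the DRS: pronoun and antecedent keep their own variables, linked by a co-reference pair. A minimal sketch, with an assumed tuple format for the propositions:

    def resolve_pronoun(pronoun, antecedent, corefs):
        """Record that a pronoun co-refers with an earlier referent, e.g.
        a(4):it(v(4)) / a(1):pot(v(1)); both keep their own variables."""
        corefs.append((pronoun, antecedent))
        return corefs

    corefs = resolve_pronoun(("a4", "it", "v4"), ("a1", "pot", "v1"), [])
    # A later sign-space planner can map v4 and v1 to one locus in space.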
2.4 DRS BNF / Ontology

DRSmultsent ::= [ VariableBindings , [ DRSsent , DRSsent ] ]

VariableBindings ::= [ NominalAttributiveProposition / NominalAttributiveProposition ]
    (currently only provision for co-reference of nominals; anaphoric relationships with events, etc. to be considered later)

DRSsent ::= drslabel : [ VariableList , [ SententialProposition , LabeledPropositions ] ]

SententialProposition ::= ImpVar : imperative ( Evar )
                        | QuVar : yesno ( [ Evar | Svar ] )
                        | QuVar : whPred1 ( [ Evar | Svar ] )
                        | QuVar : whPred2 ( Var )
                        | CommVar : comment ( Var )          (comment( ) for declaratives)

whPred1 ::= { when, how, why }
whPred2 ::= { who, what, which }

VariableList ::= (format to consider: a list of all variables/labels used in the DRS, possibly segmented into different categories for ease of extraction; currently we only generate variables vn in the variable lists)

LabeledPropositions ::= LabeledProposition
                      | LabeledProposition , LabeledPropositions
                      | DRS ⇒ DRS
    (the last production may have undesirable consequences, permitting LocationalPropositions, CollectivePropositions, ReferentRelationPropositions, TemporalRelationPropositions as antecedents; we possibly need to subcategorise)

LabeledProposition ::= QuantifiedVariable | EventProposition | StateProposition |
                       TemporalProposition | LocationalProposition |
                       SymbolicLocationalProposition | OtherPrepProposition |
                       NominalAttributiveProposition | AttributiveProposition |
                       CollectiveProposition | NumericalQuantifiedDefinition |
                       ReferentRelationProposition | TemporalRelationProposition

EventProposition ::= Evar : Proposition
StateProposition ::= Svar : Proposition

TemporalProposition ::= Tvar : time ( [ Evar | Svar ] )
                      | Tvar : TemporalPredicate ( Evar , Var )
    (Tvar denotes an interval)

TemporalPredicate ::= in | on

LocationalProposition ::= Lvar : LocationalPredicate ( Evar , Var )
                        | Lvar : DirectionalPredicate ( Evar , Var )

DirectionalPredicate ::= to | from | into | out_of
LocationalPredicate ::= within | on | above | etc.

SymbolicLocationalProposition ::= SLVar : LocationalPredicate ( Evar , Var )
    /* following Hungarian nomenclature, these are for pseudo-locational uses of prepositional phrases, 'in a hat', 'in a good humour', 'on the phone' – essentially a place holder for further thought */

OtherPrepProposition ::= PVar : OtherPrepPredicate ( Evar , Var )
OtherPrepPredicate ::= for | by | with | without

AttributiveProposition ::= Avar : Proposition
NominalAttributiveProposition ::= Nvar : Proposition

CollectiveDefinition ::= Cvar = count( Var ) | Cvar = count( Var , drslabel )
                         with #Cvar > 1
    /* use of the box notation potentially requires DRSs to be labelled for restricting predicates for plurals – see Kamp & Reyle (1993:343); requires further thought */

NumericalQuantifiedDefinition ::= Cvar = count( Var ) | Cvar = count( Var , drslabel )
                                  with #Cvar = N

ReferentRelationProposition ::= Var = Var | Var ∈ CVar

TemporalRelationProposition ::= Tvar TemporalOperator Tvar
TemporalOperator ::= = | < | > | ⊆ | ⊇
    (temporal precedence and inclusion; possibly others for completeness)

Proposition ::= Predicate ( VarList )

QuantifiedVariable ::= UniversalQuantification | ExistentialQuantification
UniversalQuantification ::= Qvar : ∀ Var
ExistentialQuantification ::= Qvar : ∃ Var

Predicate ::= PredName . Sense
PredName ::= a lexical base form
Sense ::= an item identifying the source of the lexical item and the specific sense according to that source (e.g. WordNet.3)

VarList ::= Var | Var , VarList

where
    Var      ∈ {v1, v2, v3, …}
    Evar     ∈ {e1, e2, e3, …}
    Svar     ∈ {s1, s2, s3, …}
    Tvar     ∈ {t1, t2, t3, …}
    Lvar     ∈ {l1, l2, l3, …}
    Qvar     ∈ {q1, q2, q3, …}
    Nvar     ∈ {a1, a2, a3, …}
    Pvar     ∈ {p1, p2, p3, …}
    Avar     ∈ {attr1, attr2, attr3, …}
    Cvar     ∈ {c1, c2, c3, …}
    drslabel ∈ {drs1, drs2, drs3, …}
    ImpVar   ∈ {imp1, imp2, imp3, …}
    QuVar    ∈ {qu1, qu2, qu3, …}
    CommVar  ∈ {comm1, comm2, comm3, …}
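As one possible reading of this BNF, the fragment below sketches an in-memory abstract syntax in Python. The class and field names are our assumptions for illustration, not part of the deliverable's specification.

    from dataclasses import dataclass
    from typing import List, Tuple, Union

    @dataclass
    class Proposition:
        label: str        # e.g. "e0", "s0", "l1", "attr4", "qu0"
        predicate: str    # lexical base form, e.g. "put"
        sense: str        # sense identifier, e.g. "WordNet.3"
        args: List[str]   # referent variables, e.g. ["v0", "v1"]

    # Relations such as t0 < t1 or v4 = v1 as plain triples.
    Relation = Tuple[str, str, str]

    @dataclass
    class DRS:
        referents: List[str]                          # the variable list
        conditions: List[Union[Proposition, Relation]]

    sent = DRS(referents=["v0", "v1"],
               conditions=[Proposition("e0", "put", "WordNet.3", ["v0", "v1"]),
                           ("t0", "=", "now")])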
Ian Marshall, University of East Anglia, mailto:[email protected]
Eva Safar, University of East Anglia, mailto:[email protected]

3 Encoding manual aspects of sign language: HamNoSys 4.0

3.1 Background

The Hamburg Notation System (HamNoSys) is a well-established phonetic transcription system for sign languages which comprises more than 200 iconically motivated symbols. It was developed at the Institute of German Sign Language in the 1980s (first version published in 1987: Prillwitz et al. 1987) for the transcription of individual signs as well as sign utterances for research purposes. One of the design goals was to make it applicable to all sign languages. So far, HamNoSys has been used as a notational basis in a number of gesture research projects, e.g. Hofmann & Hommel (1997) and Fröhlich & Wachsmuth (1998). However, in the context of sign language generation, ViSiCAST is the first project to use HamNoSys for storing the phonetic form of individual signs in the lexicon and for combining signs into sign language utterances. This decision was taken early in the project design phase for several reasons.

First, HamNoSys has proven quite stable over recent years and is certainly among the most frequently used notation systems within the sign linguistics community world-wide (cf. Miller, to appear). Secondly, any notation that is too tightly connected to a specific phonological theory is too high a risk as the basis for a three-year project. In addition, most phonological theories have only been tested against ASL, not any of the European sign languages, and there has been no attempt to use such a system for more than one language. (In fact, targeting a phonological system at more than one language would presuppose that their phonological inventories are identical.) Most importantly, partners have employed native signers, or have native signers associated with the project, who are already familiar with HamNoSys. Therefore, native signers' intuition about the correctness of utterances generated by the ViSiCAST translation system can be used before the animation machine (developed in other work-packages of the project) can render signs from notation faithfully enough to evaluate the generation as such. Finally, HamNoSys notations are sufficiently compact and easy to type to be used in the integrated editing environment (cf. the scenario in chapter 1).

As a consequence, one of the first milestones in the language & notation work-package was to further develop the kernel part of HamNoSys, i.e. the description of manual aspects, in order to fill minor gaps in previous versions as well as to satisfy some of the needs of using the system in a generation process. The major motivations for the changes include the aim for a more natural representation of signs, the possibility of underspecification in the notation, and the possibility of co-reference within a text. This often results in shorter HamNoSys strings. Backwards compatibility is required due to the large number of existing HamNoSys transcriptions by various research groups. The following description assumes that the reader has some familiarity with HamNoSys 2 (Prillwitz et al. 1989) and HamNoSys 3 (Hanke 2000). A self-contained documentation of HamNoSys 4 is in preparation.

3.2 Handshapes in HamNoSys 4.0

3.2.1 New bending operators

Two new symbols are added to the three already existing bending operator symbols: F and E. The inventory of bending operators is as follows:

    symbol   base joint   middle joint   distal joint   meaning
    4        max.ext.     max.ext.       max.ext.       no bending (default)
    4A       max.bent     max.ext.       max.ext.       bent
    5B       half-bent    half-bent      half-bent      round
    5C       max.ext.     max.bent       max.bent       hooked
    5E       max.bent     max.bent       max.ext.       double-bent
    5F       max.bent     max.bent       max.bent       double-hooked

(Max.ext. = joint maximally extended; max.bent = joint maximally bent; half-bent = joint bent at approx. 45°.)

Note that all bending operators cover a certain range of bending.

• Intermediate forms
There will be no new symbol for intermediate forms such as 38ì3B8; all bending operators cover a certain range of bending. Likewise, a hand with extended fingers and no bending symbol can also be used to transcribe a hand with slightly bent fingers. If necessary, intermediate forms can still be notated with the in-between operator ì.

How these symbols are applied:

• Fist
The E symbol makes it possible to differentiate between the two positions of the fingers in a fist: 2, the fist with the nails touching the palm (the default, as before), and 2E, the fist with the finger pads touching the palm (which almost always also requires some contact at the distal joint). The latter handshape could also be notated in HamNoSys as 3E; it was decided, however, to use the fist symbol in order to show the close relationship between the two handshapes.
• Fingers [+ spread]
The bending operator F for "fisting" (double-hooked) is used, for example, to differentiate between the two productions of the "Y"-handshape: 48ê, where the little finger is [- spread] (i.e. positioned as in 58èê), and 68èêè F, where the little finger is [+ spread] (i.e. positioned as in 68èê). This bending operator will only be used in cases where it is really necessary (see, e.g., above): transcriptions such as 7CéFèF should be avoided if handshapes can be transcribed in an easier and shorter way, in this case 5Cçê.

• Closed thumb-finger combinations
For closed thumb-finger combinations, the following bending operator symbols can be used:

    symbol   bending of selected fingers   place/joint where selected fingers contact each other
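To summarise the bending-operator inventory of section 3.2.1 in machine-readable form, the following sketch records the joint states per operator. The dictionary layout and the use of operator names as keys are assumptions for illustration; the actual notation uses HamNoSys glyphs.

    # Joint states per operator, following the table in section 3.2.1;
    # the dictionary layout and key names are assumptions.
    BENDING_OPERATORS = {
        #                 (base joint,  middle joint, distal joint)
        "no bending":    ("max.ext.",  "max.ext.",  "max.ext."),   # default
        "bent":          ("max.bent",  "max.ext.",  "max.ext."),
        "round":         ("half-bent", "half-bent", "half-bent"),
        "hooked":        ("max.ext.",  "max.bent",  "max.bent"),
        "double-bent":   ("max.bent",  "max.bent",  "max.ext."),
        "double-hooked": ("max.bent",  "max.bent",  "max.bent"),
    }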
